
From source to target and back: Symmetric Bi-Directional Adaptive GAN / Russo, Paolo; Carlucci, Fabio M.; Tommasi, Tatiana; Caputo, Barbara. - ELECTRONIC. - (2018), pp. 8099-8108. (Paper presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), held in Salt Lake City, Utah, USA, June 2018) [10.1109/CVPR.2018.00845].

From source to target and back: Symmetric Bi-Directional Adaptive GAN

Paolo Russo; Fabio M. Carlucci; Tatiana Tommasi; Barbara Caputo
2018

Abstract

The effectiveness of GANs in producing images according to a specific visual domain has shown potential in unsupervised domain adaptation. Labeled source images have been modified to mimic target samples for training classifiers in the target domain, and the inverse mapping from the target to the source domain has also been evaluated, without generating new images. In this paper we aim at getting the best of both worlds by introducing a symmetric mapping between domains. We jointly optimize bi-directional image transformations and combine them with target self-labeling. We define a new class consistency loss that aligns the generators in the two directions, enforcing that the class identity of an image is preserved when it passes through both domain mappings. A detailed analysis of the reconstructed images, a thorough ablation study, and extensive experiments on six different settings confirm the power of our approach.
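The class consistency idea can be illustrated concretely: a labeled source image is translated to the target domain, mapped back to the source domain, and a classifier trained on source labels must still predict its original class. The snippet below is a minimal sketch in PyTorch of that one loss term only, not the authors' implementation; it omits the adversarial and self-labeling objectives mentioned in the abstract, and the module names (G_st, G_ts, C_s) and toy dimensions are hypothetical placeholders.

    # Minimal sketch of a class-consistency loss (assumptions: G_st, G_ts, C_s
    # are placeholder nn.Modules; this is NOT the authors' code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def class_consistency_loss(x_s, y_s, G_st, G_ts, C_s):
        """Translate a labeled source image to the target domain and back,
        then require the source classifier to still predict its label."""
        x_st = G_st(x_s)       # source -> target mapping
        x_sts = G_ts(x_st)     # target -> source mapping (round trip)
        logits = C_s(x_sts)    # classify the doubly-mapped image
        return F.cross_entropy(logits, y_s)

    if __name__ == "__main__":
        # Toy usage with identity mappings and a linear classifier, for illustration only.
        G_st = G_ts = nn.Identity()
        C_s = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
        x_s = torch.randn(4, 3, 32, 32)
        y_s = torch.randint(0, 10, (4,))
        print(class_consistency_loss(x_s, y_s, G_st, G_ts, C_s).item())

In the full method this term would be optimized jointly with the bi-directional GAN losses, so that each generator is encouraged to produce translations that remain classifiable rather than merely domain-plausible.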
2018
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018)
computer vision; machine learning
04 Publication in conference proceedings::04b Conference paper in volume
Files attached to this record

Russo_From-source_Postprint_2018.pdf
Access: open access
Note: DOI: 10.1109/CVPR.2018.00845
Type: Pre-print (manuscript submitted to the publisher, prior to peer review)
License: All rights reserved
Size: 2.6 MB
Format: Adobe PDF

Russo_From-source_2018.pdf
Access: archive administrators only
Type: Publisher's version (published version with the publisher's layout)
License: All rights reserved
Size: 533.08 kB
Format: Adobe PDF

Russo_From-source_titolo-Indice_2018.pdf
Access: archive administrators only
Type: Other attached material
License: All rights reserved
Size: 365.81 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1092193
Citations
  • PubMed Central: ND (not available)
  • Scopus: 173
  • Web of Science (ISI): 126